Stochastic Approximation and Multilayer Perceptrons: The Gain Backpropagation Algorithm

Authors

  • Peter J. Gawthrop
  • Daniel G. Sbarbaro-Hofer
Abstract

A standard general algorithm, the stochastic approximation algorithm of Albert and Gardner [1], is applied in a new context to compute the weights of a multilayer perceptron network. This leads to a new algorithm, the gain backpropagation algorithm, which is related to, but significantly different from, the standard backpropagation algorithm [2]. Some simulation examples show the potential and limitations of the proposed approach and provide comparisons with the conventional backpropagation algorithm.
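To make the central idea concrete, here is a minimal sketch of stochastic approximation with a decreasing gain sequence, stripped down to estimating the weights of a single linear unit. This is an illustration of the general technique only; it is not the Albert-Gardner algorithm or the gain backpropagation algorithm of the paper, and all names and constants are chosen for the example.

```python
import numpy as np

# Hedged sketch: recover the weights of a single linear unit by a
# stochastic-approximation update with a decreasing gain sequence
# (Robbins-Monro conditions: gains sum to infinity, squared gains
# are summable). NOT the paper's algorithm; illustration only.

rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0, 0.5])   # unknown weights to be recovered
w = np.zeros(3)                        # current estimate

err_before = np.linalg.norm(w_true - w)
for k in range(1, 5001):
    x = rng.normal(size=3)             # random input sample
    e = x @ w_true - x @ w             # observed prediction error
    gain = 1.0 / k                     # decreasing gain sequence
    w += gain * e * x / (x @ x)        # normalized stochastic update
err_after = np.linalg.norm(w_true - w)

print(err_before, err_after)
```

Because each update removes only a `gain`-sized fraction of the error along the sampled direction, the estimate stabilizes as the gain decays; the paper's contribution is to derive such a gain sequence systematically for the nonlinear multilayer perceptron case rather than a linear unit.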


Similar articles

Are Rosenblatt multilayer perceptrons more powerful than sigmoidal multilayer perceptrons? From a counter example to a general result

In the eighties the problem of the lack of an efficient algorithm to train multilayer Rosenblatt perceptrons was solved by sigmoidal neural networks and backpropagation. But should we still try to find an efficient algorithm to train multilayer hardlimit neuronal networks, a task known as a NP-Complete problem? In this work we show that this would not be a waste of time by means of a counter ex...


neuralnet: Training of Neural Networks

Artificial neural networks are applied in many situations. neuralnet is built to train multi-layer perceptrons in the context of regression analyses, i.e. to approximate functional relationships between covariates and response variables. Thus, neural networks are used as extensions of generalized linear models. neuralnet is a very flexible package. The backpropagation algorithm and three versio...


Discrete All-positive Multilayer Perceptrons for Optical Implementation

All-optical multilayer perceptrons differ in various ways from the ideal neural network model. Examples are the use of non-ideal activation functions which are truncated, asymmetric, and have a non-standard gain, restriction of the network parameters to non-negative values, and the limited accuracy of the weights. In this paper, a backpropagation-based learning rule is presented that compensates...


A Fast and Convergent Stochastic Learning Algorithm for MLP

We propose a stochastic learning algorithm for multilayer perceptrons of linearthreshold function units, which theoretically converges with probability one and experimentally (for the three-layer network case) exhibits 100% convergence rate and remarkable speed on parity and simulated problems. On the parity problems (to realize the n bit parity function by n (minimal) hidden units) the algorit...


An Efficient Multilayer Quadratic Perceptron for Pattern Classification and Function Approximation

Abstract: We propose an architecture of a multilayer quadratic perceptron (MLQP) that combines advantages of multilayer perceptrons (MLPs) and higher-order feedforward neural networks. The features of MLQP are in its simple structure, practical number of adjustable connection weights and powerful learning ability. In this paper, the architecture of MLQP is described, a backpropagation lear...



Journal title:
  • Complex Systems

Volume 4, Issue 

Pages -

Publication date: 1990